    Mixed-Initiative Human-Automated Agents Teaming: Towards a Flexible Cooperation Framework

    The recent progress in robotics and artificial intelligence raises the question of how artificial agents can interact efficiently with humans. Artificial intelligence has achieved technical advances in perception and decision making across several domains, ranging from games to a variety of operational situations (e.g. face recognition [51] and firefighting missions [23]). Yet such advanced automated systems still depend on human operators for complex tactical, legal, or ethical decisions. The human is usually treated as an ideal agent, able to take control when the automated (artificial) agent reaches the limits of its range of action or fails outright (e.g. embedded sensor failures or low confidence in identification tasks). However, this approach needs to be revised, as revealed by several critical industrial events (e.g. in aviation and nuclear power plants) that were due to conflicts between humans and complex automated systems [13]. In this context, this paper reviews some of our previous works on human-automated agent interaction. More specifically, we present a mixed-initiative cooperation framework that accounts for the non-deterministic effects of agents' actions and for inaccuracies in the estimation of the human operator's state. This framework has produced convincing results and is a promising avenue for enhancing human-automated agent(s) teaming.
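    The abstract's core idea can be illustrated with a minimal sketch, not the authors' actual framework: an automated agent maintains a probabilistic (belief) estimate of the human operator's state from noisy observations and uses a simple rule to decide who holds the initiative. All names and numbers here (the operator states, the observation model, the thresholds) are illustrative assumptions.

    ```python
    OPERATOR_STATES = ("attentive", "overloaded")

    # Assumed observation model: P(observation | operator state).
    # Real frameworks would learn or elicit these probabilities.
    OBS_MODEL = {
        "fast_response": {"attentive": 0.8, "overloaded": 0.2},
        "slow_response": {"attentive": 0.2, "overloaded": 0.8},
    }

    def update_belief(belief, observation):
        """Bayesian update of the belief over the operator's state,
        reflecting inaccuracies in operator state estimation."""
        posterior = {s: belief[s] * OBS_MODEL[observation][s]
                     for s in OPERATOR_STATES}
        norm = sum(posterior.values())
        return {s: p / norm for s, p in posterior.items()}

    def choose_initiative(belief, task_confidence, threshold=0.6):
        """Mixed-initiative rule: hand control to the human when the
        agent's task confidence is low (e.g. an uncertain identification),
        keep automated control when the operator seems overloaded,
        and share initiative otherwise."""
        if task_confidence < threshold:
            return "human"
        if belief["overloaded"] > 0.7:
            return "agent"
        return "shared"

    # One step of the loop: observe, update, decide.
    belief = {"attentive": 0.5, "overloaded": 0.5}
    belief = update_belief(belief, "slow_response")
    print(choose_initiative(belief, task_confidence=0.9))  # prints "agent"
    ```

    The point of the sketch is that initiative is not fixed a priori: it moves between human and agent as both the agent's confidence and the estimated operator state evolve, which is the flexibility the abstract argues for.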